1,044 research outputs found

    Empirical consequences of symmetries

    `Global' symmetries, such as the boost invariance of classical mechanics and special relativity, can give rise to direct empirical counterparts such as the Galileo-ship phenomenon. However, a widely accepted line of thought holds that `local' symmetries, such as the diffeomorphism invariance of general relativity and the gauge invariance of classical electromagnetism, have no such direct empirical counterparts. We argue against this line of thought. We develop a framework for analysing the relationship between Galileo-ship empirical phenomena and the physical theories that model them, one that renders the relationship between theoretical and empirical symmetries transparent, and from which it follows that both global and local symmetries can give rise to Galileo-ship phenomena. In particular, we use this framework to exhibit analogs of Galileo's ship for both the diffeomorphism invariance of general relativity and the gauge invariance of electromagnetism.
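    To fix ideas about the kind of direct empirical counterpart at issue, here is the standard textbook illustration of the Galileo-ship phenomenon (an editorial sketch, not material from the paper itself): under a Galilean boost with velocity v,

        x_i' = x_i - v t, \qquad t' = t,

    accelerations are unchanged (\ddot{x}_i' = \ddot{x}_i), and forces that depend only on relative positions satisfy F_i(\{x_j' - x_k'\}) = F_i(\{x_j - x_k\}), so Newton's equations m_i \ddot{x}_i = F_i take exactly the same form in the boosted frame. Empirically, experiments performed below deck on a uniformly moving ship unfold just as they would at rest.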

    Banks and the Year 2000 Problem


    Synthesis, screening, and sequencing of cysteine-rich one-bead one-compound peptide libraries.

    Cysteine-rich peptides are valued as tags for biarsenical fluorophores and as environmentally important reagents for binding toxic heavy metals. Because of the inherent difficulties created by cysteine, the power of one-bead one-compound (OBOC) libraries has never been applied to the discovery of short cysteine-rich peptides. We have developed the first method for the synthesis, screening, and sequencing of cysteine-rich OBOC peptide libraries. First, we synthesized a heavily biased cysteine-rich OBOC library, incorporating 50% cysteine at each position (Ac-X8-KM-TentaGel). We then developed conditions for cysteine alkylation, cyanogen bromide cleavage, and direct MS/MS sequencing of that library at the single-bead level. The sequencing efficiency of this library was comparable to that of a traditional cysteine-free library. To validate screening of cysteine-rich OBOC libraries, we reacted a library with the biarsenical FlAsH and identified beads bearing the known biarsenical-binding motif (CCXXCC). These results enable OBOC libraries to be used in the high-throughput discovery of cysteine-rich peptides for protein tagging, environmental remediation of metal contaminants, or development as cysteine-rich pharmaceuticals.
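    The CCXXCC motif notation above uses X for an arbitrary amino acid, so motif hits among sequenced beads can be flagged with a simple pattern match. The following is a minimal illustrative sketch in Python; the sequences and function name are hypothetical, and this is not the authors' analysis pipeline.

        import re

        # CCXXCC: two cysteines, any two residues, two cysteines (X = any amino acid)
        MOTIF = re.compile(r"CC[A-Z]{2}CC")

        def has_biarsenical_motif(peptide: str) -> bool:
            """Return True if the peptide sequence contains the CCXXCC motif."""
            return MOTIF.search(peptide.upper()) is not None

        # Hypothetical sequences standing in for MS/MS-sequenced beads
        for seq in ["ACCPGCCK", "GWMSCCKR"]:
            print(seq, has_biarsenical_motif(seq))   # ACCPGCCK True, GWMSCCKR False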

    Exploring the scope for Normalisation Process Theory to help evaluate and understand the processes involved when scaling up integrated models of care: a case study of the scaling up of the Gnosall Memory Service

    Purpose: The scaling up of promising, innovative integration projects presents challenges to social and health care systems. Evidence that a new service provides (cost-)effective care in a pilot locality can leave us some way from understanding how the innovation worked, and which features of the context were crucial to achieving those results, when the model is applied in other localities. Even unpacking the “black box” of the innovation can leave gaps in understanding with regard to scaling it up. Theory-led approaches are increasingly proposed as a means of addressing this knowledge gap in understanding implementation. Our particular interest here is the potential use of theory to help understand the scaling up of integration models across sites. The theory under consideration is Normalisation Process Theory (NPT).
    Design/methodology/approach: The article draws on a natural experiment that provided a range of data from two sites working to scale up a well-regarded, innovative, integrated, primary care-based dementia service to other primary care sites. This offered an opportunity to use NPT as a framing device for exploring what the theory adds to our understanding of the issues that contribute to the success or failure of such a scaling-up project.
    Findings: NPT offers a framework with the potential to bring greater consistency to our understanding of how models of integrated care are rolled out. The knowledge gained here, and through further application of NPT, could inform the evaluation and planning of future scaling-up programmes.
    Research limitations/implications: The research was limited in the data collected from the case study; nevertheless, in the context of an exploration of the use of the theory, the observations provided a practical setting in which to begin to examine the usefulness of NPT before embarking on its use in more expensive, larger-scale studies.
    Practical implications: NPT provides a promising framework for understanding, in detail, what may contribute to the successful scaling up of integrated service models.
    Social implications: NPT potentially provides a helpful framework for understanding and managing efforts to have new integrated service models adopted more widely in practice, and for helping to ensure that models which are effective at small scale develop effectively when scaled up.
    Originality/value: This paper examines the use of NPT as a theory to guide understanding of the scaling up of promising, innovative integration service models.

    Achieving Superscalar Performance without Superscalar Overheads - A Dataflow Compiler IR for Custom Computing

    The difficulty of effectively parallelizing code for multicore processors, combined with the end of threshold voltage scaling, has resulted in the problem of 'Dark Silicon', severely limiting performance scaling despite Moore's Law. To address dark silicon, not only must we drastically improve the energy efficiency of computation, but, due to Amdahl's Law, we must do so without compromising sequential performance. Designers increasingly utilize custom hardware to dramatically improve both efficiency and performance in increasingly heterogeneous architectures. Unfortunately, while it efficiently accelerates numeric, data-parallel applications, custom hardware often exhibits poor performance on sequential code, so complex, power-hungry superscalar processors must still be utilized. This paper addresses the problem of improving sequential performance in custom hardware by (a) switching from a statically scheduled to a dynamically scheduled (dataflow) execution model, and (b) developing a new compiler IR for high-level synthesis that enables aggressive exposition of ILP even in the presence of complex control flow. This new IR is directly implemented as a static dataflow graph in hardware by our high-level synthesis tool-chain, and shows an average speedup of 1.13 times over equivalent hardware generated using LegUp, an existing HLS tool. In addition, our new IR allows us to further trade area and energy for performance, increasing the average speedup to 1.55 times through loop unrolling, with a peak speedup of 4.05 times. Our custom hardware is able to approach the sequential cycle-counts of an Intel Nehalem Core i7 superscalar processor, while consuming on average only 0.25 times the energy of an in-order Altera Nios IIf processor.
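    To illustrate the difference between static scheduling and the dataflow execution model the abstract describes (an operation fires when its operands arrive, not when program order reaches it), here is a toy software interpreter for a static dataflow graph. This is an editorial sketch in Python; the class names and token representation are invented for illustration and do not reflect the authors' IR or tool-chain.

        import operator

        class Node:
            """One operation in a static dataflow graph."""
            def __init__(self, op, n_inputs):
                self.op = op
                self.n_inputs = n_inputs
                self.operands = {}      # input slot -> value that has arrived
                self.consumers = []     # (consumer node, consumer input slot)

        def run(tokens):
            """Fire each node as soon as all of its operands have arrived."""
            ready = list(tokens)        # worklist of (node, slot, value) tokens
            while ready:
                node, slot, value = ready.pop()
                node.operands[slot] = value
                if len(node.operands) == node.n_inputs:   # all inputs present: fire
                    result = node.op(*(node.operands[i] for i in range(node.n_inputs)))
                    for consumer, cslot in node.consumers:
                        ready.append((consumer, cslot, result))
                    yield node, result

        # (1 + 2) * (7 - 4): the add and sub nodes may fire in either order,
        # exposing instruction-level parallelism without a program counter.
        add, sub, mul = Node(operator.add, 2), Node(operator.sub, 2), Node(operator.mul, 2)
        add.consumers.append((mul, 0))
        sub.consumers.append((mul, 1))
        for node, result in run([(add, 0, 1), (add, 1, 2), (sub, 0, 7), (sub, 1, 4)]):
            print(result)               # 3, 3, then 9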